Moderate: Red Hat Ceph Storage 4.3 Security and Bug Fix update

Related Vulnerabilities: CVE-2020-25658, CVE-2021-3524, CVE-2021-3979

Synopsis

Moderate: Red Hat Ceph Storage 4.3 Security and Bug Fix update

Type/Severity

Security Advisory: Moderate

Topic

New packages for Red Hat Ceph Storage 4.3 are now available on Red Hat Enterprise Linux 8.5.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

Description

Red Hat Ceph Storage is a scalable, open, software-defined storage platform that combines the most stable version of the Ceph storage system with a Ceph management platform, deployment utilities, and support services.

Security Fix(es):

  • python-rsa: Bleichenbacher timing oracle attack against RSA decryption (CVE-2020-25658)
  • ceph object gateway: radosgw: CRLF injection (CVE-2021-3524)
  • ceph: Ceph volume does not honour osd_dmcrypt_key_size (CVE-2021-3979)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
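
For context on CVE-2021-3979 only, the affected setting is the OSD dmcrypt key size, which is configured in ceph.conf. The snippet below is a minimal, illustrative example; the option name comes from the CVE title above, and the value shown is just an example, not a recommendation:

    [osd]
    osd_dmcrypt_key_size = 512

The fix ensures ceph-volume honours this value when encrypting OSD data devices instead of silently falling back to a default.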

These new packages include numerous bug fixes. Space precludes documenting all of these changes in this advisory. Users are directed to the Red Hat Ceph Storage Release Notes for information on the most significant of these changes:

https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/4.3/html/release_notes/index

All users of Red Hat Ceph Storage are advised to upgrade to these new packages, which provide these security and bug fixes.

Solution

Before applying this update, make sure all previously released errata relevant to your system have been applied.

For details on how to apply this update, refer to:

https://access.redhat.com/articles/11258
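
As a brief illustration only (the article above remains the authoritative procedure, and the exact packages updated depend on which Ceph components are installed on each node), errata are normally applied with the system package manager, for example:

    # yum update

on Red Hat Enterprise Linux 7 nodes, or

    # dnf update

on Red Hat Enterprise Linux 8 nodes, followed by restarting the affected Ceph daemons so they run the updated binaries. For ceph-ansible based clusters, the rolling_update.yml playbook referenced in several of the fixes below performs the upgrade cluster-wide.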

For supported configurations, refer to:

https://access.redhat.com/articles/1548993

Affected Products

  • Red Hat Ceph Storage MON 4 for RHEL 8 x86_64
  • Red Hat Ceph Storage MON 4 for RHEL 7 x86_64
  • Red Hat Ceph Storage OSD 4 for RHEL 8 x86_64
  • Red Hat Ceph Storage OSD 4 for RHEL 7 x86_64
  • Red Hat Enterprise Linux for x86_64 8 x86_64
  • Red Hat Enterprise Linux Server 7 x86_64
  • Red Hat Ceph Storage for Power 4 for RHEL 8 ppc64le
  • Red Hat Ceph Storage for Power 4 for RHEL 7 ppc64le
  • Red Hat Ceph Storage MON for Power 4 for RHEL 8 ppc64le
  • Red Hat Ceph Storage MON for Power 4 for RHEL 7 ppc64le
  • Red Hat Ceph Storage OSD for Power 4 for RHEL 8 ppc64le
  • Red Hat Ceph Storage OSD for Power 4 for RHEL 7 ppc64le
  • Red Hat Ceph Storage for IBM z Systems 4 s390x
  • Red Hat Ceph Storage MON for IBM z Systems 4 s390x
  • Red Hat Ceph Storage OSD for IBM z Systems 4 s390x

Fixes

  • BZ - 1786691 - [ceph-ansible][RFE] support for Grafana-server purge.yml
  • BZ - 1855350 - non-fatal yum install error of nfs-ganesha
  • BZ - 1876860 - [RFE] [ceph-ansible] Add osd_auto_discovery support in purge-cluster.yml playbook
  • BZ - 1889972 - CVE-2020-25658 python-rsa: bleichenbacher timing oracle attack against RSA decryption
  • BZ - 1891557 - ceph-volume ignores osd_mount_options_xfs
  • BZ - 1894038 - pybind/mgr/volumes: Make number of cloner threads configurable
  • BZ - 1896803 - [cee/sd][ceph-volume] when running playbook add-osd.yml or site.yml ceph-volume does not create OSDs on new devices
  • BZ - 1902999 - [RFE][ceph-ansible] Include configuration parameters for alertmanager in ceph-ansible
  • BZ - 1906022 - [RFE] [ceph-ansible] : ceph-validate : Validate devices mentioned in lvm_volumes
  • BZ - 1927574 - [RFE] Allow ceph dashboard IP to be set via all.yml
  • BZ - 1936299 - [GSS] ceph dashboard certification alert "x509: certificate signed by unknown authority"
  • BZ - 1941775 - [RFE] Allow setting global nfs-ganesha options
  • BZ - 1951674 - CVE-2021-3524 ceph object gateway: radosgw: CRLF injection
  • BZ - 1952571 - [GSS][ceph-ansible][RFE] Additional pre-check for mon quorum failures while running rolling_update.yml playbook
  • BZ - 1955038 - [RFE] Include radosgw-admin sync status in Ceph-Dashboard Grafana
  • BZ - 1960306 - Some tempest object_storage negative tests fail when RGW returns a 404 error and the tests expect a 401
  • BZ - 1962748 - [ceph-container] support the systemctl target units it will help to manage all the daemons at once in a given host
  • BZ - 1964097 - [RFE] RGW Multi-site data sync optimizations
  • BZ - 1964099 - RGW service failed when upgraded to pacific from nautilus - error creating FIFO
  • BZ - 1965314 - [RFE] client latency getting impacted after bucket reaching max size quota
  • BZ - 1965504 - [cee/sd][ceph-ansible][mutli-site] ceph-ansible does not correctly set zone endpoints when https is set
  • BZ - 1965540 - [RFE] Add the role being assumed by the user to the RGW opslogs when using STS assumerole
  • BZ - 1967532 - [RGW]: Versioned delete using expiration creates multiple delete markers
  • BZ - 1975102 - [RADOS] ceph mon stat --format json-pretty return output in plain format
  • BZ - 1978643 - Rolling upgrade from 4.2 to 5.x failed during mon upgrade
  • BZ - 1979186 - Snapshot based mirroring not working
  • BZ - 1981860 - [cee/sd][rgw] radosgw-admin datalog trim --shard-id <ID> --end-marker <marker> ends with Segmentation fault
  • BZ - 1986684 - [RBD] - rbd trash purge command not deleting images in trash
  • BZ - 1987041 - RGW container fails to start when rgw thread pool size is close to 2048
  • BZ - 1988171 - set a non-zero default value for osd_client_message_cap
  • BZ - 1990772 - Running add-osd.yml in containerised env when MONs run OSD daemons causes Ansible playbook to fail at Task "unset noup flag" because it will run MON command on MON host with wrong container name
  • BZ - 1992178 - [Dashboard] mgr/dashboard: Grafana "Ceph-cluster" Client connections shows a huge/unrealistic value
  • BZ - 1992246 - [ceph-dashbaord][grafana] The performance graphs for OSDs are not displaying
  • BZ - 1994930 - [gss][ceph-ansible][iscsigw]ceph-ansible automatically appending trusted_ip_list=192.168.122.1 in iscsi-gateway.cfg
  • BZ - 1995037 - [Ceph-Ansible]: Failure in switch-from-non-containerized-to-containerized-ceph-daemons playbook
  • BZ - 1995562 - rgw sts memory leak
  • BZ - 1995574 - [Ceph-Ansible] site-container.yml playbook does not pull the Ceph Dashboard container images behind a proxy
  • BZ - 1996765 - During upgrade from 3.x to 4. x - LC intermittently errors with 400 on setlifecycle from s3cmd client
  • BZ - 1997586 - [cee/sd][ceph-dashboard] Ceph dashboard does not support NFS ganesha.
  • BZ - 2001444 - [ceph-dashboard] In branding about page should update to 4.3
  • BZ - 2002084 - [RGW] Malformed ETag found in the XML data (list of objects in a bucket)
  • BZ - 2002261 - [GSS] panel of Network Load - Top 10 Hosts show wrong data in ceph-dashboard
  • BZ - 2003212 - [Bluestore] Remove the possibility of replay log and file inconsistency
  • BZ - 2003219 - Allow recreating file system with specific fscid
  • BZ - 2004738 - [cee/sd][MGR][insights] the insights command is logging into ceph.audit.log excessively - "[{"prefix":"config-key set","key":"mgr/insights/health_history/ ...
  • BZ - 2006166 - [Tracker][Storage Workload DFG] [RHCS 4.3] Release-based Workload Testing
  • BZ - 2006686 - rgw sts memory leak
  • BZ - 2006805 - [4.3][rgw-multisite]: Metadata sync is behind by 5 shards on the secondary and does not find a new marker to advance to.
  • BZ - 2006912 - Notify timeout can induce a race condition as it attempts to resend the cache update
  • BZ - 2006984 - [4.3][RGW-MS]: Metadata sync is stuck on the secondary site.
  • BZ - 2008860 - rgw: With policy specifying invalid arn, users can list content of any bucket
  • BZ - 2009516 - Bluestore repair might erroneously remove SharedBlob entries.
  • BZ - 2011451 - [RGW] Malformed eTag follow up fix after bz1995365
  • BZ - 2014304 - Rolling Upgrade issue --limit option not working
  • BZ - 2016994 - rgw-multisite/upgrade: Observing segfault in thread_name:radosgw on latest RHCS-4.3, and rgws not coming up after the upgrade.
  • BZ - 2017878 - Bucket sync status for tenanted buckets reports 'failed to read source bucket info: (2) No such file or directory'
  • BZ - 2021037 - [RGW][MS]: seg fault seen on thread_name:sync-log-trim
  • BZ - 2021075 - [ceph-mgr]: Invalid argument malformed crash metadata: time data, seen in mgr logs
  • BZ - 2021447 - [ceph-dashboard] Dashboard reporting zero size for RBD images mapped to Linux clients via RBD kernel module
  • BZ - 2021993 - rgw: deleting and purging a bucket can get stuck in an endless loop 4.3
  • BZ - 2022585 - [RGW] bucket stats output has incorrect num_objects in rgw.none and rgw.main on multipart upload
  • BZ - 2022650 - [RGW] reshard cancel errors with (22) Invalid argument
  • BZ - 2023379 - rgw: fix `bi put` not using right bucket index shard 4.3
  • BZ - 2024788 - CVE-2021-3979 ceph: Ceph volume does not honour osd_dmcrypt_key_size
  • BZ - 2027449 - Metadata synchronization failed, "metadata is behind on 1 shards" appear
  • BZ - 2027721 - [RHCS-4.3][RGW][LC][reshard]: on bucket which is resharded LC is not processing
  • BZ - 2027812 - [RGW] Lifecycle is not cleaning up expired objects
  • BZ - 2028248 - [RFE] Allow processing lifecycle for a single bucket only
  • BZ - 2028827 - [4.3][RGW] radosgw-admin bucket rm --bucket=${bucket} --bypass-gc --purge-objects failing crashing on buckets having incomplete multiparts
  • BZ - 2030452 - [RFE]increase HTTP headers size in beast.
  • BZ - 2032560 - [4.3][RGW] radosgw-admin bucket check --fix not removing orphaned entries
  • BZ - 2034595 - [ceph-ansible][dashboard]: ceph-ansible playbook fails to deploy ceph-dashboard with a rgw multi-realm deployment
  • BZ - 2034637 - [RGW] Performing bucket stats after enabling indexless storage policy causes radosgw-admin to crash
  • BZ - 2034999 - [RFE][4.3][dynamic-resharding]: Reshard list should suggest a 'tentative' shard value the bucket can be resharded to.
  • BZ - 2036930 - [4.3][4.3-multi-site] sync-log-trim thread crashes on empty endpoints, should give error and bail
  • BZ - 2036941 - [RGW] client.admin crash seen on executing 'bucket radoslist ' command with indexless buckets
  • BZ - 2038798 - Prometheus alertmanager reports msg="Error on notify" Post https://xxxx:8444/api/prometheus_receiver: x509: cannot validate certificate for XXX because it doesn't contain any IP SANs"
  • BZ - 2039175 - [GSS][rgw] return result 405 MethodNotAllowed for unknown resources
  • BZ - 2040161 - [RBD-Mirror] - Bootstrap import error message " unable to find a keyring on /etc/ceph/..keyring"
  • BZ - 2042585 - [CEE] Client uploaded S3 object not listed but can be downloaded
  • BZ - 2044176 - [4.3][rgw-multisite /LC]: Put bucket lifecycle configuration on non-master zone returns HTTP 503.
  • BZ - 2044370 - [4.3][rgw-multisite][LC]: 10M objects deletion via LC for a bucket leaves 20 objects on the secondary
  • BZ - 2044406 - [4.3][RGW][LC]:10M object deletion via LC for a bucket, shows 26 objects in bucket stats and bucket list even when the objects were deleted.
  • BZ - 2047694 - [cee/sd][ceph-iscsi] Upgrading ceph-iscsi from RHCS 3 to RHCS 4 fails if ceph-iscsi-tools is installed in the cluster. [4.3]
  • BZ - 2052202 - Data race in RGWDataChangesLog::ChangeStatus
  • BZ - 2056719 - [ceph-volume] 'lvm create' must FAIL if specified device has existing partitions
  • BZ - 2056906 - [4.3][LC][rgw-resharding]: rgw crashes while resharding and LC process happen in parallel.
  • BZ - 2058201 - [rgw-multisite][sync run] Segfault in RGWGC::send_chain()
  • BZ - 2063029 - Adding mdss using site.yml with limit fails
  • BZ - 2071137 - Objects from versioned buckets are getting re-appeared and also accessible (able to download) after its deletion through LC
  • BZ - 2076192 - [Ceph-Ansible] ceph installation got failed at TASK[include_role : ceph-facts]
  • BZ - 2077139 - [RGW] Segmentation Fault in Object Deletion code path (RGWDeleteMultiObj)
  • BZ - 2079016 - [RGW]: Crash on MS secondary when sync resumes post error